Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
Yuyao Ge, Shenghua Liu, Yiwei Wang, Lingrui Mei, Baolong Bi, Xuanshan Zhou, Jiayu Yao, Jiafeng Guo, Xueqi Cheng
Vision-Language Models (VLMs) have demonstrated remarkable success across diverse visual tasks, yet their performance degrades in complex visual environments. Existing enhancement approaches require additional training, rely on external segmentation tools, or operate at coarse-grained levels, overlooking the abilities innate to VLMs themselves. To bridge this gap, we investigate VLMs' attention patterns and discover that: (1) visual complexity strongly correlates with attention entropy, which negatively impacts reasoning performance; (2) attention progressively refines from global scanning in shallow layers to focused convergence in deeper layers, with the degree of convergence determined by visual complexity. Building on these insights, we propose Contrastive Attention Refinement for Visual Enhancement (CARVE), a training-free method that extracts task-relevant visual signals through attention contrasting at the pixel level. Extensive experiments demonstrate that CARVE consistently enhances performance, achieving up to 75% improvement on open-source models. Our work provides critical insights into the interplay between visual complexity and attention mechanisms, offering an efficient pathway for improving visual reasoning through contrastive attention.

Vision-Language Models (VLMs) have achieved remarkable success across diverse tasks (Radford et al., 2021; Jia et al., 2021; Alayrac et al., 2022).
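The two ingredients the abstract names, attention entropy and pixel-level attention contrasting, can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual implementation: the function names, the use of a uniform map as a stand-in for generic "global scanning" attention, and the simple clipped-subtraction contrast are all hypothetical choices for exposition.

```python
import numpy as np

def attention_entropy(attn):
    """Shannon entropy of a normalized attention map (higher = more diffuse)."""
    p = attn / attn.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def contrastive_attention(task_attn, generic_attn, eps=1e-8):
    """Hypothetical pixel-level contrast: keep only the attention mass that a
    task-conditioned query places on a pixel beyond what a generic query does."""
    contrast = np.clip(task_attn - generic_attn, 0.0, None)
    return contrast / (contrast.sum() + eps)

# Toy 4x4 attention maps standing in for real VLM attention over image patches.
rng = np.random.default_rng(0)
task = rng.random((4, 4))
task /= task.sum()                 # task-conditioned attention (peaked)
generic = np.full((4, 4), 1 / 16)  # uniform map: maximally diffuse "global scan"
mask = contrastive_attention(task, generic)
print(attention_entropy(generic) > attention_entropy(task))  # uniform map is maximally diffuse
```

Under this toy model, the uniform map attains the maximum entropy log(16), mirroring the abstract's claim that diffuse attention accompanies visual complexity, while the contrasted mask isolates where task-relevant attention exceeds the generic baseline.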
Pentagon launches tech to stop AI-powered killing machines from going rogue on the battlefield due to robot-fooling visual 'noise'
Pentagon officials have sounded the alarm about 'unique classes of vulnerabilities for AI or autonomous systems,' which they hope new research can fix. The program, dubbed Guaranteeing AI Robustness against Deception (GARD), has been tasked since 2022 with identifying how visual data or other electronic signal inputs for AI might be gamed by the calculated introduction of noise. Computer scientists with one of GARD's defense contractors have experimented with kaleidoscopic patches designed to fool AI-based systems into making false IDs. 'You can essentially, by adding noise to an image or a sensor, perhaps break a downstream machine learning algorithm,' as one senior Pentagon official managing the research explained Wednesday. The news comes as fears that the Pentagon has been 'building killer robots in the basement' have allegedly led to stricter AI rules for the US military -- mandating that all systems must be approved before deployment.